Question Answering (Generative Models)

Synopsis

Applies a Question Answering model

Description

Applies a Question Answering model. These models answer questions about a given context. For example, if the context is “My name is Ingo, and I live in Houston” and the question is “Where do I live?”, the model would produce the answer “Houston”. Unlike most other task operators, these models do not accept a dynamic prompt; instead, the data set needs to provide two data columns: one column contains the context about which the questions will be asked, while the other column contains the questions to ask.
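As an illustration of what such an extractive question answering model does, here is a minimal Python sketch based on the Huggingface transformers pipeline. This is not the operator's implementation; the model name deepset/roberta-base-squad2 is only an example, and any extractive question answering model from the Huggingface portal could be used in the same way.

    from transformers import pipeline

    # Load an extractive question answering pipeline (example model from Huggingface).
    qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

    context = "My name is Ingo, and I live in Houston."
    question = "Where do I live?"

    # The model extracts the answer span from the given context.
    result = qa(question=question, context=context)
    print(result["answer"])  # e.g. "Houston"
    print(result["score"])   # confidence score of the extracted answer span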

Input

  • data (Data Table)

    The data which will be used by the model. Since this operator does not use a dynamic prompt, the data set needs to provide the question and context columns specified in the parameters below.

  • model (File)

    The optional model directory (in your project / repository or on your file system). It has to be provided if the parameter "use local model" is set to true. Typically, this is only necessary if you want to use your own fine-tuned local version of a model (see the sketch below).
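As a rough illustration of the difference between using a model from the Huggingface portal and using a local model directory, consider the following Python sketch. The model name and the local path are placeholders, not values prescribed by the operator.

    from transformers import AutoModelForQuestionAnswering, AutoTokenizer

    # Variant 1: load a model by its full Huggingface name
    # (downloaded from the portal and stored in a local cache).
    hub_model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
    hub_tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

    # Variant 2: load your own fine-tuned model from a local directory
    # (placeholder path; corresponds to the optional model input of the operator).
    local_model = AutoModelForQuestionAnswering.from_pretrained("path/to/my-finetuned-qa-model")
    local_tokenizer = AutoTokenizer.from_pretrained("path/to/my-finetuned-qa-model")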

Output

  • data (Data Table)

    The input data plus a new column (or several, if more than one solution is requested) containing the answers produced by the model.

  • model (File)

    The model directory which has been delivered as input.

Parameters

  • use_local_model Indicates if a local model should be used based on a local file directory or if a model from the Huggingface portal should be used. If a local model is to be used, all task operators require a file object referencing the model directory as a second input. If this parameter is unchecked, you will need to specify the full model name from the Huggingface portal for the “model” parameter.
  • model The model from the Huggingface portal which will be used by the operator. Only used when the “use local model” parameter is unchecked. The model name needs to be the full model name as found on each model card on the Huggingface portal. Please be aware that using large models can result in downloads of many gigabytes of data and that models will be stored in a local cache.
  • name The name of the new column which will be created as a result.
  • question The column which contains the questions.
  • context The column which contains the context about which the questions are asked.
  • top_k Defines how many solutions are created by the model. If more than one solution is created, multiple result columns will be created as well (see the sketch after this list).
  • allow_impossible Indicates if an impossible or empty answer is allowed as a result.
  • max_question_tokens The maximum number of question tokens used as input for the model. Note that some models can only work with specific maximum numbers of tokens. Please refer to the model documentation pages on Huggingface for more information about such limits.
  • max_target_tokens The maximum number of tokens produced as output by the model. Note that some models can only work with specific maximum numbers of tokens. Please refer to the model documentation pages on Huggingface for more information about such limits.
  • device Where the model computation should take place: on a GPU, a CPU, or Apple’s MPS architecture. If set to Automatic, the computation will prefer the GPU if available and fall back to the CPU otherwise.
  • device_indices If you have multiple GPUs and computation is set up to happen on GPUs, you can specify which ones are used with this parameter. Counting of devices starts with 0. The default of “0” means that the first GPU device in the system will be used, a value of “1” would refer to the second, and so on. You can utilize multiple GPUs by providing a comma-separated list of device indices. For example, you could use “0,1,2,3” on a machine with four GPUs if all four should be utilized. Please note that RapidMiner performs data-parallel computation, which means that the model needs to be small enough to be completely loaded on each of your GPUs.
  • data_type Specifies the data type under which the model should be loaded. Using lower precisions can reduce memory usage while leading to slightly less accurate results in some cases. If set to “auto”, the data precision is derived from the model itself.
  • revision The specific model version to use. The default is “main”. The value can be a branch name, a tag name, or a commit id of the model in the Huggingface git repository.
  • trust_remote_code Whether or not to allow custom code defined on the Hub in its own modeling, configuration, tokenization, or even pipeline files.
  • conda_environment The conda environment used for this model task. Additional packages may be installed into this environment; please refer to the extension documentation for additional details on this and on version requirements.
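The following Python sketch, again based on the Huggingface transformers pipeline rather than on the operator's own code, shows roughly how several of these parameters relate to comparable settings when applying a question answering model directly. All names and values are illustrative assumptions, and the mapping is approximate.

    import torch
    from transformers import pipeline

    qa = pipeline(
        "question-answering",
        model="deepset/roberta-base-squad2",  # "model": full Huggingface model name
        revision="main",                      # "revision": branch, tag, or commit id
        device=0,                             # "device" / "device indices": first GPU
        torch_dtype=torch.float16,            # "data type": lower precision saves memory
        trust_remote_code=False,              # "trust remote code"
    )

    results = qa(
        question="Where do I live?",
        context="My name is Ingo, and I live in Houston.",
        top_k=3,                        # "top_k": request several candidate answers
        handle_impossible_answer=True,  # roughly corresponds to "allow impossible"
        max_question_len=64,            # roughly corresponds to "max question tokens"
        max_answer_len=30,              # roughly corresponds to "max target tokens"
    )

    # With top_k > 1 the pipeline returns a list of candidates with scores,
    # similar to the multiple result columns created by the operator.
    for candidate in results:
        print(candidate["answer"], candidate["score"])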

Tutorial Processes

Using a question answering model

This process uses a question answering model. These models require a question column and a context column in the input data and answer each question about its corresponding context. The process creates some questions and contexts and feeds them into the task operator. You can also deliver a local model using the second operator input or specify a different model from Huggingface using the model parameter.
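Outside of RapidMiner, the same workflow can be sketched in a few lines of Python. The example rows, the column names, and the model name below are assumptions made for illustration and are not part of the tutorial process itself.

    import pandas as pd
    from transformers import pipeline

    # Build a small data table with a context and a question column.
    data = pd.DataFrame({
        "context": [
            "My name is Ingo, and I live in Houston.",
            "The Generative Models extension wraps Huggingface models.",
        ],
        "question": [
            "Where do I live?",
            "What does the extension wrap?",
        ],
    })

    qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

    # Answer each question about its context and store the result in a new column,
    # analogous to the column created via the "name" parameter.
    data["answer"] = [
        qa(question=q, context=c)["answer"]
        for q, c in zip(data["question"], data["context"])
    ]
    print(data)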